Mozart exploration server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

A data model for music information retrieval

Internal identifier: 000123 (PascalFrancis/Corpus); previous: 000122; next: 000124


Author: Tamar Berman

Source:

RBID : Pascal:07-0534512

French descriptors

English descriptors

Abstract

This paper describes a data model for the representation of tonal music. In this model, music is conceived as an equally-spaced time series of 12-dimensional vectors. The model has been successfully applied to the task of discovering frequently recurring patterns, and to the related task of retrieving user-defined musical patterns. This was accomplished by converting midi sequences of music by W.A. Mozart into the time series representation and analyzing these with data mining tools and SQL queries. The novelty of the pattern extraction capability supported by the model is in the potentially complex description of the sequences, which may contain both melodic and harmonic features, may be embedded within each other, or interspersed with other patterns or occurrences. A unique feature of the model is the use of time intervals as the basic representational unit, which fosters possibilities for future application to audio data.
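The abstract describes the representation as an equally spaced time series of 12-dimensional vectors, obtained by converting MIDI note events. A minimal sketch of such a conversion is shown below, assuming note events given as (onset, duration, MIDI pitch) tuples and a fixed slice width; these details, and the function name `to_chroma_series`, are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: map note events onto an equally spaced time series
# of 12-dimensional pitch-class vectors. Each slice is a binary vector
# marking which of the 12 pitch classes sound during that interval.
# The event format (onset, duration, pitch) and slice_len are assumptions.

def to_chroma_series(notes, slice_len=0.25, total_len=None):
    """notes: iterable of (onset, duration, midi_pitch) tuples."""
    if total_len is None:
        total_len = max(on + dur for on, dur, _ in notes)
    n_slices = int(round(total_len / slice_len))
    series = [[0] * 12 for _ in range(n_slices)]
    for onset, dur, pitch in notes:
        first = int(onset / slice_len)
        # Subtract a tiny epsilon so a note ending exactly on a slice
        # boundary does not spill into the next slice.
        last = int((onset + dur) / slice_len - 1e-9)
        for t in range(first, min(last + 1, n_slices)):
            series[t][pitch % 12] = 1  # fold MIDI pitch to pitch class
    return series

# A C-major chord (C4, E4, G4) in the first slice, then a lone D4:
notes = [(0.0, 0.25, 60), (0.0, 0.25, 64), (0.0, 0.25, 67),
         (0.25, 0.25, 62)]
print(to_chroma_series(notes))
```

Because each slice is just a row of twelve values, the resulting series can be stored in a relational table and queried for melodic or harmonic patterns with SQL, as the abstract suggests.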

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.

pA  
A01 01  1    @0 0302-9743
A05       @2 4032
A08 01  1  ENG  @1 A data model for music information retrieval
A09 01  1  ENG  @1 Next generation information technologies and systems : 6th international conference, NGITS 2006, Kibbutz Shefayim, Israel, July 4-6, 2006 : proceedings
A11 01  1    @1 BERMAN (Tamar)
A12 01  1    @1 ETZION (Opher) @9 ed.
A12 02  1    @1 KUFLIK (Tsvi) @9 ed.
A12 03  1    @1 MOTRO (Amihai) @9 ed.
A14 01      @1 Graduate School of Library and Information Science University of Illinois at Urbana-Champaign @2 Champaign, IL 61820 @3 USA @Z 1 aut.
A20       @1 165-173
A21       @1 2006
A23 01      @0 ENG
A26 01      @0 3-540-35472-7
A43 01      @1 INIST @2 16343 @5 354000153620160150
A44       @0 0000 @1 © 2007 INIST-CNRS. All rights reserved.
A45       @0 15 ref.
A47 01  1    @0 07-0534512
A60       @1 P @2 C
A61       @0 A
A64 01  1    @0 Lecture notes in computer science
A66 01      @0 DEU
C01 01    ENG  @0 This paper describes a data model for the representation of tonal music. In this model, music is conceived as an equally-spaced time series of 12-dimensional vectors. The model has been successfully applied to the task of discovering frequently recurring patterns, and to the related task of retrieving user-defined musical patterns. This was accomplished by converting midi sequences of music by W.A. Mozart into the time series representation and analyzing these with data mining tools and SQL queries. The novelty of the pattern extraction capability supported by the model is in the potentially complex description of the sequences, which may contain both melodic and harmonic features, may be embedded within each other, or interspersed with other patterns or occurrences. A unique feature of the model is the use of time intervals as the basic representational unit, which fosters possibilities for future application to audio data.
C02 01  X    @0 001D02B07D
C02 02  X    @0 001D02B07B
C02 03  X    @0 001B40C38
C03 01  X  FRE  @0 Système information @5 01
C03 01  X  ENG  @0 Information system @5 01
C03 01  X  SPA  @0 Sistema información @5 01
C03 02  X  FRE  @0 Recherche information @5 06
C03 02  X  ENG  @0 Information retrieval @5 06
C03 02  X  SPA  @0 Búsqueda información @5 06
C03 03  X  FRE  @0 Série temporelle @5 07
C03 03  X  ENG  @0 Time series @5 07
C03 03  X  SPA  @0 Serie temporal @5 07
C03 04  X  FRE  @0 Découverte connaissance @5 08
C03 04  X  ENG  @0 Knowledge discovery @5 08
C03 04  X  SPA  @0 Descubrimiento conocimiento @5 08
C03 05  X  FRE  @0 Analyse donnée @5 09
C03 05  X  ENG  @0 Data analysis @5 09
C03 05  X  SPA  @0 Análisis datos @5 09
C03 06  X  FRE  @0 Extraction information @5 10
C03 06  X  ENG  @0 Information extraction @5 10
C03 06  X  SPA  @0 Extracción información @5 10
C03 07  X  FRE  @0 Fouille donnée @5 11
C03 07  X  ENG  @0 Data mining @5 11
C03 07  X  SPA  @0 Busca dato @5 11
C03 08  X  FRE  @0 Interrogation base donnée @5 12
C03 08  X  ENG  @0 Database query @5 12
C03 08  X  SPA  @0 Interrogación base datos @5 12
C03 09  3  FRE  @0 Acoustique audio @5 13
C03 09  3  ENG  @0 Audio acoustics @5 13
C03 10  X  FRE  @0 Acoustique musicale @5 18
C03 10  X  ENG  @0 Musical acoustics @5 18
C03 10  X  SPA  @0 Acústica musical @5 18
C03 11  X  FRE  @0 Tonie @5 19
C03 11  X  ENG  @0 Pitch(acoustics) @5 19
C03 11  X  SPA  @0 Altura sonida @5 19
C03 12  X  FRE  @0 Musique @5 20
C03 12  X  ENG  @0 Music @5 20
C03 12  X  SPA  @0 Música @5 20
C03 13  X  FRE  @0 Comportement utilisateur @5 21
C03 13  X  ENG  @0 User behavior @5 21
C03 13  X  SPA  @0 Comportamiento usuario @5 21
C03 14  3  FRE  @0 SQL @5 22
C03 14  3  ENG  @0 SQL @5 22
C03 15  3  FRE  @0 Modèle donnée @5 23
C03 15  3  ENG  @0 Data models @5 23
C03 16  X  FRE  @0 Modélisation @5 24
C03 16  X  ENG  @0 Modeling @5 24
C03 16  X  SPA  @0 Modelización @5 24
C03 17  X  FRE  @0 Extraction forme @5 25
C03 17  X  ENG  @0 Pattern extraction @5 25
C03 17  X  SPA  @0 Extracción forma @5 25
C03 18  X  FRE  @0 Harmonique @5 26
C03 18  X  ENG  @0 Harmonic @5 26
C03 18  X  SPA  @0 Armónica @5 26
C03 19  X  FRE  @0 Temps occupation @5 27
C03 19  X  ENG  @0 Occupation time @5 27
C03 19  X  SPA  @0 Tiempo ocupación @5 27
C03 20  X  FRE  @0 Intervalle temps @5 28
C03 20  X  ENG  @0 Time interval @5 28
C03 20  X  SPA  @0 Intervalo tiempo @5 28
C03 21  X  FRE  @0 . @4 INC @5 82
N21       @1 344
N44 01      @1 OTO
N82       @1 OTO
pR  
A30 01  1  ENG  @1 NGITS 2006 @2 6 @3 Shefayim ISR @4 2006

Inist format (server)

NO : PASCAL 07-0534512 INIST
ET : A data model for music information retrieval
AU : BERMAN (Tamar); ETZION (Opher); KUFLIK (Tsvi); MOTRO (Amihai)
AF : Graduate School of Library and Information Science University of Illinois at Urbana-Champaign/Champaign, IL 61820/Etats-Unis (1 aut.)
DT : Publication en série; Congrès; Niveau analytique
SO : Lecture notes in computer science; ISSN 0302-9743; Allemagne; Da. 2006; Vol. 4032; Pp. 165-173; Bibl. 15 ref.
LA : Anglais
EA : This paper describes a data model for the representation of tonal music. In this model, music is conceived as an equally-spaced time series of 12-dimensional vectors. The model has been successfully applied to the task of discovering frequently recurring patterns, and to the related task of retrieving user-defined musical patterns. This was accomplished by converting midi sequences of music by W.A. Mozart into the time series representation and analyzing these with data mining tools and SQL queries. The novelty of the pattern extraction capability supported by the model is in the potentially complex description of the sequences, which may contain both melodic and harmonic features, may be embedded within each other, or interspersed with other patterns or occurrences. A unique feature of the model is the use of time intervals as the basic representational unit, which fosters possibilities for future application to audio data.
CC : 001D02B07D; 001D02B07B; 001B40C38
FD : Système information; Recherche information; Série temporelle; Découverte connaissance; Analyse donnée; Extraction information; Fouille donnée; Interrogation base donnée; Acoustique audio; Acoustique musicale; Tonie; Musique; Comportement utilisateur; SQL; Modèle donnée; Modélisation; Extraction forme; Harmonique; Temps occupation; Intervalle temps; .
ED : Information system; Information retrieval; Time series; Knowledge discovery; Data analysis; Information extraction; Data mining; Database query; Audio acoustics; Musical acoustics; Pitch(acoustics); Music; User behavior; SQL; Data models; Modeling; Pattern extraction; Harmonic; Occupation time; Time interval
SD : Sistema información; Búsqueda información; Serie temporal; Descubrimiento conocimiento; Análisis datos; Extracción información; Busca dato; Interrogación base datos; Acústica musical; Altura sonida; Música; Comportamiento usuario; Modelización; Extracción forma; Armónica; Tiempo ocupación; Intervalo tiempo
LO : INIST-16343.354000153620160150
ID : 07-0534512

Links to Exploration step

Pascal:07-0534512

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">A data model for music information retrieval</title>
<author>
<name sortKey="Berman, Tamar" sort="Berman, Tamar" uniqKey="Berman T" first="Tamar" last="Berman">Tamar Berman</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Graduate School of Library and Information Science University of Illinois at Urbana-Champaign</s1>
<s2>Champaign, IL 61820</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">07-0534512</idno>
<date when="2006">2006</date>
<idno type="stanalyst">PASCAL 07-0534512 INIST</idno>
<idno type="RBID">Pascal:07-0534512</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000123</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">A data model for music information retrieval</title>
<author>
<name sortKey="Berman, Tamar" sort="Berman, Tamar" uniqKey="Berman T" first="Tamar" last="Berman">Tamar Berman</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Graduate School of Library and Information Science University of Illinois at Urbana-Champaign</s1>
<s2>Champaign, IL 61820</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
<imprint>
<date when="2006">2006</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Audio acoustics</term>
<term>Data analysis</term>
<term>Data mining</term>
<term>Data models</term>
<term>Database query</term>
<term>Harmonic</term>
<term>Information extraction</term>
<term>Information retrieval</term>
<term>Information system</term>
<term>Knowledge discovery</term>
<term>Modeling</term>
<term>Music</term>
<term>Musical acoustics</term>
<term>Occupation time</term>
<term>Pattern extraction</term>
<term>Pitch(acoustics)</term>
<term>SQL</term>
<term>Time interval</term>
<term>Time series</term>
<term>User behavior</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Système information</term>
<term>Recherche information</term>
<term>Série temporelle</term>
<term>Découverte connaissance</term>
<term>Analyse donnée</term>
<term>Extraction information</term>
<term>Fouille donnée</term>
<term>Interrogation base donnée</term>
<term>Acoustique audio</term>
<term>Acoustique musicale</term>
<term>Tonie</term>
<term>Musique</term>
<term>Comportement utilisateur</term>
<term>SQL</term>
<term>Modèle donnée</term>
<term>Modélisation</term>
<term>Extraction forme</term>
<term>Harmonique</term>
<term>Temps occupation</term>
<term>Intervalle temps</term>
<term>.</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">This paper describes a data model for the representation of tonal music. In this model, music is conceived as an equally-spaced time series of 12-dimensional vectors. The model has been successfully applied to the task of discovering frequently recurring patterns, and to the related task of retrieving user-defined musical patterns. This was accomplished by converting midi sequences of music by W.A. Mozart into the time series representation and analyzing these with data mining tools and SQL queries. The novelty of the pattern extraction capability supported by the model is in the potentially complex description of the sequences, which may contain both melodic and harmonic features, may be embedded within each other, or interspersed with other patterns or occurrences. A unique feature of the model is the use of time intervals as the basic representational unit, which fosters possibilities for future application to audio data.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0302-9743</s0>
</fA01>
<fA05>
<s2>4032</s2>
</fA05>
<fA08 i1="01" i2="1" l="ENG">
<s1>A data model for music information retrieval</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Next generation information technologies and systems : 6th international conference, NGITS 2006, Kibbutz Shefayim, Israel, July 4-6, 2006 : proceedings</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>BERMAN (Tamar)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>ETZION (Opher)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>KUFLIK (Tsvi)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>MOTRO (Amihai)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>Graduate School of Library and Information Science University of Illinois at Urbana-Champaign</s1>
<s2>Champaign, IL 61820</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA20>
<s1>165-173</s1>
</fA20>
<fA21>
<s1>2006</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA26 i1="01">
<s0>3-540-35472-7</s0>
</fA26>
<fA43 i1="01">
<s1>INIST</s1>
<s2>16343</s2>
<s5>354000153620160150</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2007 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>15 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>07-0534512</s0>
</fA47>
<fA60>
<s1>P</s1>
<s2>C</s2>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Lecture notes in computer science</s0>
</fA64>
<fA66 i1="01">
<s0>DEU</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>This paper describes a data model for the representation of tonal music. In this model, music is conceived as an equally-spaced time series of 12-dimensional vectors. The model has been successfully applied to the task of discovering frequently recurring patterns, and to the related task of retrieving user-defined musical patterns. This was accomplished by converting midi sequences of music by W.A. Mozart into the time series representation and analyzing these with data mining tools and SQL queries. The novelty of the pattern extraction capability supported by the model is in the potentially complex description of the sequences, which may contain both melodic and harmonic features, may be embedded within each other, or interspersed with other patterns or occurrences. A unique feature of the model is the use of time intervals as the basic representational unit, which fosters possibilities for future application to audio data.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>001D02B07D</s0>
</fC02>
<fC02 i1="02" i2="X">
<s0>001D02B07B</s0>
</fC02>
<fC02 i1="03" i2="X">
<s0>001B40C38</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Système information</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Information system</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Sistema información</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Recherche information</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Information retrieval</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Búsqueda información</s0>
<s5>06</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Série temporelle</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Time series</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Serie temporal</s0>
<s5>07</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Découverte connaissance</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Knowledge discovery</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Descubrimiento conocimiento</s0>
<s5>08</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Analyse donnée</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Data analysis</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Análisis datos</s0>
<s5>09</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Extraction information</s0>
<s5>10</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Information extraction</s0>
<s5>10</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Extracción información</s0>
<s5>10</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Fouille donnée</s0>
<s5>11</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Data mining</s0>
<s5>11</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Busca dato</s0>
<s5>11</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Interrogation base donnée</s0>
<s5>12</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Database query</s0>
<s5>12</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Interrogación base datos</s0>
<s5>12</s5>
</fC03>
<fC03 i1="09" i2="3" l="FRE">
<s0>Acoustique audio</s0>
<s5>13</s5>
</fC03>
<fC03 i1="09" i2="3" l="ENG">
<s0>Audio acoustics</s0>
<s5>13</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Acoustique musicale</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Musical acoustics</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Acústica musical</s0>
<s5>18</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE">
<s0>Tonie</s0>
<s5>19</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG">
<s0>Pitch(acoustics)</s0>
<s5>19</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA">
<s0>Altura sonida</s0>
<s5>19</s5>
</fC03>
<fC03 i1="12" i2="X" l="FRE">
<s0>Musique</s0>
<s5>20</s5>
</fC03>
<fC03 i1="12" i2="X" l="ENG">
<s0>Music</s0>
<s5>20</s5>
</fC03>
<fC03 i1="12" i2="X" l="SPA">
<s0>Música</s0>
<s5>20</s5>
</fC03>
<fC03 i1="13" i2="X" l="FRE">
<s0>Comportement utilisateur</s0>
<s5>21</s5>
</fC03>
<fC03 i1="13" i2="X" l="ENG">
<s0>User behavior</s0>
<s5>21</s5>
</fC03>
<fC03 i1="13" i2="X" l="SPA">
<s0>Comportamiento usuario</s0>
<s5>21</s5>
</fC03>
<fC03 i1="14" i2="3" l="FRE">
<s0>SQL</s0>
<s5>22</s5>
</fC03>
<fC03 i1="14" i2="3" l="ENG">
<s0>SQL</s0>
<s5>22</s5>
</fC03>
<fC03 i1="15" i2="3" l="FRE">
<s0>Modèle donnée</s0>
<s5>23</s5>
</fC03>
<fC03 i1="15" i2="3" l="ENG">
<s0>Data models</s0>
<s5>23</s5>
</fC03>
<fC03 i1="16" i2="X" l="FRE">
<s0>Modélisation</s0>
<s5>24</s5>
</fC03>
<fC03 i1="16" i2="X" l="ENG">
<s0>Modeling</s0>
<s5>24</s5>
</fC03>
<fC03 i1="16" i2="X" l="SPA">
<s0>Modelización</s0>
<s5>24</s5>
</fC03>
<fC03 i1="17" i2="X" l="FRE">
<s0>Extraction forme</s0>
<s5>25</s5>
</fC03>
<fC03 i1="17" i2="X" l="ENG">
<s0>Pattern extraction</s0>
<s5>25</s5>
</fC03>
<fC03 i1="17" i2="X" l="SPA">
<s0>Extracción forma</s0>
<s5>25</s5>
</fC03>
<fC03 i1="18" i2="X" l="FRE">
<s0>Harmonique</s0>
<s5>26</s5>
</fC03>
<fC03 i1="18" i2="X" l="ENG">
<s0>Harmonic</s0>
<s5>26</s5>
</fC03>
<fC03 i1="18" i2="X" l="SPA">
<s0>Armónica</s0>
<s5>26</s5>
</fC03>
<fC03 i1="19" i2="X" l="FRE">
<s0>Temps occupation</s0>
<s5>27</s5>
</fC03>
<fC03 i1="19" i2="X" l="ENG">
<s0>Occupation time</s0>
<s5>27</s5>
</fC03>
<fC03 i1="19" i2="X" l="SPA">
<s0>Tiempo ocupación</s0>
<s5>27</s5>
</fC03>
<fC03 i1="20" i2="X" l="FRE">
<s0>Intervalle temps</s0>
<s5>28</s5>
</fC03>
<fC03 i1="20" i2="X" l="ENG">
<s0>Time interval</s0>
<s5>28</s5>
</fC03>
<fC03 i1="20" i2="X" l="SPA">
<s0>Intervalo tiempo</s0>
<s5>28</s5>
</fC03>
<fC03 i1="21" i2="X" l="FRE">
<s0>.</s0>
<s4>INC</s4>
<s5>82</s5>
</fC03>
<fN21>
<s1>344</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
<pR>
<fA30 i1="01" i2="1" l="ENG">
<s1>NGITS 2006</s1>
<s2>6</s2>
<s3>Shefayim ISR</s3>
<s4>2006</s4>
</fA30>
</pR>
</standard>
<server>
<NO>PASCAL 07-0534512 INIST</NO>
<ET>A data model for music information retrieval</ET>
<AU>BERMAN (Tamar); ETZION (Opher); KUFLIK (Tsvi); MOTRO (Amihai)</AU>
<AF>Graduate School of Library and Information Science University of Illinois at Urbana-Champaign/Champaign, IL 61820/Etats-Unis (1 aut.)</AF>
<DT>Publication en série; Congrès; Niveau analytique</DT>
<SO>Lecture notes in computer science; ISSN 0302-9743; Allemagne; Da. 2006; Vol. 4032; Pp. 165-173; Bibl. 15 ref.</SO>
<LA>Anglais</LA>
<EA>This paper describes a data model for the representation of tonal music. In this model, music is conceived as an equally-spaced time series of 12-dimensional vectors. The model has been successfully applied to the task of discovering frequently recurring patterns, and to the related task of retrieving user-defined musical patterns. This was accomplished by converting midi sequences of music by W.A. Mozart into the time series representation and analyzing these with data mining tools and SQL queries. The novelty of the pattern extraction capability supported by the model is in the potentially complex description of the sequences, which may contain both melodic and harmonic features, may be embedded within each other, or interspersed with other patterns or occurrences. A unique feature of the model is the use of time intervals as the basic representational unit, which fosters possibilities for future application to audio data.</EA>
<CC>001D02B07D; 001D02B07B; 001B40C38</CC>
<FD>Système information; Recherche information; Série temporelle; Découverte connaissance; Analyse donnée; Extraction information; Fouille donnée; Interrogation base donnée; Acoustique audio; Acoustique musicale; Tonie; Musique; Comportement utilisateur; SQL; Modèle donnée; Modélisation; Extraction forme; Harmonique; Temps occupation; Intervalle temps; .</FD>
<ED>Information system; Information retrieval; Time series; Knowledge discovery; Data analysis; Information extraction; Data mining; Database query; Audio acoustics; Musical acoustics; Pitch(acoustics); Music; User behavior; SQL; Data models; Modeling; Pattern extraction; Harmonic; Occupation time; Time interval</ED>
<SD>Sistema información; Búsqueda información; Serie temporal; Descubrimiento conocimiento; Análisis datos; Extracción información; Busca dato; Interrogación base datos; Acústica musical; Altura sonida; Música; Comportamiento usuario; Modelización; Extracción forma; Armónica; Tiempo ocupación; Intervalo tiempo</SD>
<LO>INIST-16343.354000153620160150</LO>
<ID>07-0534512</ID>
</server>
</inist>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/MozartV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000123 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000123 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    MozartV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:07-0534512
   |texte=   A data model for music information retrieval
}}

Wicri

This area was generated with Dilib version V0.6.20.
Data generation: Sun Apr 10 15:06:14 2016. Site generation: Tue Feb 7 15:40:35 2023